
    Hiding the complexity: building a distributed ATLAS Tier-2 with a single resource interface using ARC middleware

    Since their inception, Grids for high energy physics have found management of data to be the most challenging aspect of operations. This problem has generally been tackled by the experiment's data management framework controlling in fine detail the distribution of data around the grid, together with the careful brokering of jobs to sites with co-located data. This approach, however, presents experiments with a difficult and complex system to manage, as well as introducing a rigidity into the framework which is very far from the original conception of the grid.

    In this paper we describe how the ScotGrid distributed Tier-2, which has sites in Glasgow, Edinburgh and Durham, was presented to ATLAS as a single, unified resource using the ARC middleware stack. In this model the ScotGrid 'data store' is hosted at Glasgow and presented as a single ATLAS storage resource. As jobs are taken from the ATLAS PanDA framework, they are dispatched to the computing cluster with the fastest response time. An ARC compute element at each site then asynchronously stages the data from the data store into a local cache hosted at each site. The job is then launched in the batch system and accesses data locally.

    We discuss the merits of this system compared to other operational models, from the point of view of both the resource providers (sites) and the resource consumers (experiments), and consider the issues involved in transitioning to this model.
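The dispatch-then-stage workflow described above can be sketched as follows. This is a minimal illustrative model, not the actual ARC or PanDA code: the site names, the probe, and the staging helper are all assumptions for the sake of the example.

```python
import concurrent.futures
import time

# Hypothetical site list; in the paper these are the ScotGrid clusters.
SITES = ["glasgow", "edinburgh", "durham"]

def response_time(site):
    """Probe a compute element and return its response latency.
    A real deployment would query the ARC information system here;
    this stand-in just measures an (empty) probe."""
    start = time.monotonic()
    # ... contact the site's compute element ...
    return time.monotonic() - start

def stage_in(files, cache):
    """Asynchronously copy input files from the central data store
    into the site-local cache before the batch job starts."""
    for f in files:
        cache[f] = f"staged:{f}"  # placeholder for a real transfer
    return cache

def dispatch(job_files):
    # 1. Broker the job to whichever cluster answers fastest.
    site = min(SITES, key=response_time)
    # 2. Stage data asynchronously while the job sits in the queue.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        cache = pool.submit(stage_in, job_files, {}).result()
    # 3. The job then runs in the batch system and reads locally.
    return site, cache

site, cache = dispatch(["AOD.pool.root"])
```

The key design point mirrored here is that data movement is decoupled from brokering: the experiment framework never needs to know which physical site runs the job.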

    Multi-core job submission and grid resource scheduling for ATLAS AthenaMP

    AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in the overall application memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the computing resources available to ATLAS can best exploit the notable improvements delivered by switching to this multi-process model. A study into the effectiveness and scalability of AthenaMP in a production environment will be presented. Best practices for configuring the main LRMS implementations currently used by grid sites will be identified in the context of multi-core scheduling optimisation.

    A Longitudinal Mixed Logit Model for Estimation of Push and Pull Effects in Residential Location Choice

    We develop a random effects discrete choice model for the analysis of households' choice of neighbourhood over time. The model is parameterised in a way that exploits longitudinal data to separate the influence of neighbourhood characteristics on the decision to move out of the current area ("push" effects) and on the choice of one destination over another ("pull" effects). Random effects are included to allow for unobserved heterogeneity between households in their propensity to move, and in the importance placed on area characteristics. The model also includes area-level random effects. The combination of a large choice set, large sample size and repeated observations means that existing estimation approaches are often infeasible. We therefore propose an efficient MCMC algorithm for the analysis of large-scale datasets. The model is applied in an analysis of residential choice in England using data from the British Household Panel Survey linked to neighbourhood-level census data. We consider how effects of area deprivation and distance from the current area depend on household characteristics and life course transitions in the previous year. We find substantial differences between households in the effects of deprivation on out-mobility and selection of destination, with evidence of severely constrained choices among less-advantaged households.
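A schematic form of the utility specification described above, using my own notation rather than the paper's: household $i$, observed in area $j$ at occasion $t$, evaluates alternative $k$ (with $k=j$ meaning "stay").

```latex
U_{ijkt} =
\underbrace{\beta_i' x_{jt}}_{\text{push: current-area effects}}
+ \underbrace{\gamma_i' z_{kt}}_{\text{pull: destination effects}}
+ \underbrace{u_k}_{\text{area random effect}}
+ \varepsilon_{ijkt},
\qquad
\Pr(\text{choose } k) =
\frac{\exp(V_{ijkt})}{\sum_{k'} \exp(V_{ijk't})}
```

Here $\beta_i$ and $\gamma_i$ are household-level random coefficients (capturing heterogeneity in the propensity to move and in the weight placed on area characteristics), $u_k$ is the area-level random effect, and $\varepsilon_{ijkt}$ is i.i.d. extreme value, giving the mixed logit choice probabilities that the MCMC algorithm integrates over.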

    Quattor: Tools and Techniques for the Configuration, Installation and Management of Large-Scale Grid Computing Fabrics

    This paper describes the quattor tool suite, a new system for the installation, configuration, and management of operating systems and application software for computing fabrics. At present, Unix derivatives such as Linux and Solaris are supported. Quattor is a powerful, portable and modular open-source solution that has been shown to scale to thousands of computing nodes and offers a significant reduction in management costs for large computing fabrics. The quattor tool suite includes innovations compared to existing solutions which make it very useful for computing fabrics integrated into grid environments. Evaluations of the tool suite in current large-scale computing environments are presented.

    Establishing Applicability of SSDs to LHC Tier-2 Hardware Configuration

    Solid State Disk technologies are increasingly replacing high-speed hard disks as the storage technology in high-random-I/O environments. There are several potentially I/O-bound services within the typical LHC Tier-2: in the back-end, with the trend towards many-core architectures continuing, worker nodes running many single-threaded jobs and storage nodes delivering many simultaneous files can both exhibit I/O-limited efficiency. We estimate the effectiveness of affordable SSDs in the context of worker nodes, on a large Tier-2 production setup, using both low-level tools and real LHC I/O-intensive data analysis jobs, comparing and contrasting with high-performance spinning-disk-based solutions. We consider the applicability of each solution in the context of its price/performance metrics, with an eye on the pragmatic issues facing Tier-2 provision and upgrades.

    Comment: 6 pages, 1 figure, 4 tables. Conference proceedings for CHEP201
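A minimal example of the kind of low-level measurement such a comparison rests on: a crude random-read benchmark that reports I/O operations per second. This is a sketch of the general technique, not the paper's actual tooling (production studies typically use dedicated benchmarks such as fio); the block size and read count are arbitrary.

```python
import os
import random
import time

def random_read_iops(path, block=4096, reads=1000):
    """Issue `reads` reads of `block` bytes at random offsets within
    `path` and return the achieved I/O operations per second. Random
    4 KiB reads are the access pattern where SSDs most clearly
    outperform spinning disks."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.monotonic()
        for _ in range(reads):
            offset = random.randrange(0, max(size - block, 1))
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, block)
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return reads / elapsed
```

Note that on a real system the page cache must be bypassed or dropped between runs for the numbers to reflect the device rather than RAM; price/performance is then a matter of dividing measured IOPS by cost per usable terabyte.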